3D motion tracking using multiple cameras with OpenPose and Pose2Sim¶


Šárka Kadavá (kadava@leibniz-zas.de), Wim Pouw (wim.pouw@donders.ru.nl)


Info documents¶

This Python coding module shows how to perform 3D motion tracking on videos recorded with multiple cameras. Our contribution is an easy Python demo and pipeline that walks through the procedure from start to end and is easily adaptable for processing large batches (when you already have data to work with). Note that this notebook is still quite experimental and uses Pose2Sim 0.1; there is a newer version, and we hope to update this notebook soon.

First, we estimate 2D coordinates using OpenPose. Afterwards, we calibrate the cameras using a checkerboard, and we triangulate to estimate the coordinates in 3D space.

Repository: https://github.com/sarkadava/demo_3Dtracking_pose2sim/tree/main

Jupyter notebook: https://github.com/sarkadava/demo_3Dtracking_pose2sim/blob/main/openpose_to_pose2sim_tracking.ipynb

If you wish to set up a multi-camera recording yourself, you can check this module: https://envisionbox.org/openpose_to_pose2sim_tracking.html

Background¶

The code below first estimates the 2D coordinates in each video. It then uses a checkerboard calibration video from each camera to find the relative positions and angles between the cameras. Finally, we run Pose2Sim to triangulate the three videos and obtain coordinates in 3D space.
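For orientation, the pipeline assumes a project layout roughly like the following (a sketch inferred from the code below; your own folder names may differ):

```
projectdata/
└── trial_0/
    ├── raw-2d/                  # the three camera videos (.avi)
    ├── calibration/             # checkerboard videos, one subfolder per camera
    ├── pose-2d/                 # OpenPose JSON output, one subfolder per camera
    ├── pose-2d-trackingvideos/  # OpenPose overlay videos
    └── pose-3d/                 # triangulated .trc/.csv output
```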

Note that we largely use materials provided by the OpenPose and Pose2Sim creators.

Citation for OpenPose:

Cao, Z., Simon, T., Wei, S. E., & Sheikh, Y. (2017). Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7291-7299).

Citation for Pose2Sim:

Pagnon, D., Domalain, M., & Reveret, L. (2022). Pose2Sim: An open-source Python package for multiview markerless kinematics. Journal of Open Source Software, 7(77), 4362.

Pagnon, D., Domalain, M., & Reveret, L. (2022). Pose2Sim: an end-to-end workflow for 3D markerless sports kinematics—part 2: accuracy. Sensors, 22(7), 2712.

Pagnon, D., Domalain, M., & Reveret, L. (2021). Pose2Sim: an end-to-end workflow for 3D markerless sports kinematics—part 1: robustness. Sensors, 21(19), 6530.

Was this helpful?¶

Citation for this module: Kadavá, Š., Pouw, W. (2024). 3D motion tracking using multiple cameras with OpenPose and Pose2Sim [the day you viewed the site]. Retrieved from: xxx

Requirements¶

Make sure you install the packages in requirements.txt before running this module (pip install -r requirements.txt).

After downloading the GitHub repo¶

Once you have cloned the GitHub repo, you don't yet have everything you need.

For the envision demo, please download:

  • STEP 1: the bin folder, which should go in the openpose folder
  • STEP 2: the BODY_25B model, by following the link 'download_25model_EnvisionDemo', and place the 'pose_iter_XXXXXX.caffemodel' file in the pose/body_25b/ folder.

The larger files needed to run OpenPose are now present, and OpenPose should run when called from the Python code below.

For more information about the BODY_25B keypoint OpenPose model, see https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/tree/master/experimental_models. We use this model because it is recommended by Pose2Sim.

Preparing the environment¶

First, we identify all the folders we will need.

In [1]:
import os
import subprocess
import glob

# this is the current working directory
curfolder = os.path.abspath('./')

# this is the folder where openpose lives
openposefol = curfolder+'/openpose/'
openpose_demo_loc = curfolder+'/openpose/bin/OpenPoseDemo.exe'
# this is the model we are going to employ
model_to_employ = 'BODY_25B'

# in this folder, we have our folders with videos (you can of course have multiple videos for multibatch processing)
folderstotrack = glob.glob(curfolder+'/projectdata/*')

print(folderstotrack)
['D:\\Research_projects\\demo_3Dtracking_pose2sim/projectdata\\trial_0']

2D tracking with OpenPose¶

OpenPose usually works by running OpenPoseDemo.exe in a command prompt or terminal. Here we essentially write the commands that would serve as terminal input, but run them from Python to keep our pipeline nice and linear.

In [2]:
print(curfolder)

def runcommand(command):
    # run the command using subprocess for OPENPOSE TRACKING
    try:
        subprocess.run(command, shell=True, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Command execution failed with error code {e.returncode}")
    except FileNotFoundError:
        print("The OpenPoseDemo.exe executable was not found.")

# loop over the videofolders and prepare the folder structure
for i in folderstotrack:
    print(i)
    os.chdir(openposefol)
    # identify all avi files in folder
    direc = glob.glob(i + '/raw-2d/' +'*.avi')

    # 3 cameras
    video0 = direc[0]
    video1 = direc[1]
    video2 = direc[2]

    videolist = [video0, video1, video2]
    print(videolist)

    # make new directories (and overwrite, to show the code in action)
    directories = [
        i+'/pose-2d/',
        i+'/pose-2d/pose_cam1_json/',
        i+'/pose-2d/pose_cam2_json/',
        i+'/pose-2d/pose_cam3_json/',
        i+'/pose-2d-trackingvideos/'
    ]
    for directory in directories:
        os.makedirs(directory, exist_ok=True)

    # initialize the pose2 folder
    outputfol1 = i+'/pose-2d/pose_cam1_json/'
    outputfol2 = i+'/pose-2d/pose_cam2_json/'
    outputfol3 = i+'/pose-2d/pose_cam3_json/'

    outputfollist = [outputfol1, outputfol2, outputfol3]
    
    # loop over each video and perform motion tracking
    for it, j in enumerate(outputfollist):
        # build the OpenPose command for this camera's video
        openposelocation = ' ' + openpose_demo_loc + ' '
        model = '--model_pose' + ' ' + model_to_employ + ' '
        video = '--video ' + videolist[it] + ' '
        todo = '--write_json '
        outputfol = j + ' '
        videoadd = '--write_video '
        videopath = i+'/pose-2d-trackingvideos/' + 'video'+str(it)+'.avi' + ' '
        command = r' '+openposelocation+model+video+todo+outputfol+videoadd+videopath
        print('were going to send this to command prompt: ' + command)
        runcommand(command)

# after this command the current folder changed to openpose; we now move it back
print( os.path.abspath('../'))
curfolder = os.path.abspath('../')
os.chdir(os.path.abspath('../'))
D:\Research_projects\demo_3Dtracking_pose2sim
D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0
['D:\\Research_projects\\demo_3Dtracking_pose2sim/projectdata\\trial_0/raw-2d\\0_1_trial_1_raw_cam1.avi', 'D:\\Research_projects\\demo_3Dtracking_pose2sim/projectdata\\trial_0/raw-2d\\0_1_trial_1_raw_cam2.avi', 'D:\\Research_projects\\demo_3Dtracking_pose2sim/projectdata\\trial_0/raw-2d\\0_1_trial_1_raw_cam3.avi']
were going to send this to command prompt:   D:\Research_projects\demo_3Dtracking_pose2sim/openpose/bin/OpenPoseDemo.exe --model_pose BODY_25B --video D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0/raw-2d\0_1_trial_1_raw_cam1.avi --write_json D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0/pose-2d/pose_cam1_json/ --write_video D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0/pose-2d-trackingvideos/video0.avi 
were going to send this to command prompt:   D:\Research_projects\demo_3Dtracking_pose2sim/openpose/bin/OpenPoseDemo.exe --model_pose BODY_25B --video D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0/raw-2d\0_1_trial_1_raw_cam2.avi --write_json D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0/pose-2d/pose_cam2_json/ --write_video D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0/pose-2d-trackingvideos/video1.avi 
were going to send this to command prompt:   D:\Research_projects\demo_3Dtracking_pose2sim/openpose/bin/OpenPoseDemo.exe --model_pose BODY_25B --video D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0/raw-2d\0_1_trial_1_raw_cam3.avi --write_json D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0/pose-2d/pose_cam3_json/ --write_video D:\Research_projects\demo_3Dtracking_pose2sim/projectdata\trial_0/pose-2d-trackingvideos/video2.avi 
D:\Research_projects\demo_3Dtracking_pose2sim
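Since Windows paths often contain spaces, string-concatenated commands like the one above can break on quoting. A safer variant (a sketch, not part of the original pipeline; the executable and data paths below are made up) passes the arguments to `subprocess.run` as a list, so no shell quoting is needed:

```python
import subprocess

def build_openpose_args(demo_exe, model, video, json_out, video_out):
    """Build the OpenPose command as an argument list (no shell quoting needed)."""
    return [
        demo_exe,
        "--model_pose", model,
        "--video", video,
        "--write_json", json_out,
        "--write_video", video_out,
    ]

args = build_openpose_args(
    "C:/openpose/bin/OpenPoseDemo.exe",   # hypothetical location
    "BODY_25B",
    "C:/data/trial 0/cam1.avi",           # a path with a space is fine here
    "C:/data/trial 0/pose_cam1_json/",
    "C:/data/trial 0/video0.avi",
)
# subprocess.run(args, check=True)  # uncomment to actually launch OpenPose
print(args[0])
```

With a list of arguments, `subprocess.run` bypasses the shell entirely, so spaces in paths need no extra quoting.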

Check results: OpenPose tracking videos¶

OpenPose also allows you to visually inspect the tracking quality. Let's combine the tracking videos and show them with the Python code below.

In [3]:
#### Show OpenPose tracking videos side by side, making one combined tracking video
import glob
from moviepy.editor import VideoFileClip, clips_array
from IPython.display import Video

def videos_to_render_combined(videofolder, newvideofile, oldvideotype):
    vids = glob.glob(videofolder + '*' + oldvideotype)
    #### Load the video clips
    video_clips = [VideoFileClip(vid) for vid in vids]
    
    #### Concatenate the video clips horizontally
    final_video = clips_array([video_clips])
    output_path = os.path.abspath(newvideofile)
    print(output_path)
    
    final_video.write_videofile(output_path, codec='libx265',verbose= False, logger= None)

videos_to_render_combined(videofolder = folderstotrack[0]+'/pose-2d-trackingvideos/', 
                          newvideofile = "./Images/temp/concatenated_video.mp4",
                          oldvideotype = '.avi')
# Load the video clip
clip = VideoFileClip("./Images/temp/concatenated_video.mp4")
clip.ipython_display(width=480,verbose= False, logger= None)
D:\Research_projects\demo_3Dtracking_pose2sim\Images\temp\concatenated_video.mp4
Moviepy - Building video __temp__.mp4.
Moviepy - Writing video __temp__.mp4

                                                                                                                       
Moviepy - Done !
Moviepy - video ready __temp__.mp4
Out[3]:
(video preview shown in the notebook)

Now we have 2D coordinates for each of the videos.
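Each OpenPose JSON file holds the keypoints for one frame, with `pose_keypoints_2d` stored as a flat `[x1, y1, c1, x2, y2, c2, ...]` list per detected person. As a sketch of how to read one back into a convenient array (the file content below is made up for illustration):

```python
import json
import numpy as np

# a made-up single-frame OpenPose output with one person and two keypoints
frame_json = json.loads("""
{"version": 1.3,
 "people": [{"pose_keypoints_2d": [310.5, 220.1, 0.91, 315.2, 260.7, 0.88]}]}
""")

def keypoints_array(frame, person=0):
    """Reshape the flat keypoint list into an (n_keypoints, 3) array of x, y, confidence."""
    flat = frame["people"][person]["pose_keypoints_2d"]
    return np.array(flat).reshape(-1, 3)

kps = keypoints_array(frame_json)
print(kps.shape)  # → (2, 3)
print(kps[0])     # x, y, confidence of the first keypoint
```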

3D estimation¶

We will now calibrate the cameras to obtain their intrinsic and extrinsic parameters, which will be used to triangulate the 2D kinematics into estimated 3D coordinates.
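To give an intuition for what the triangulation step does (a minimal sketch of the direct linear transform, not Pose2Sim's actual implementation; the camera matrices below are made up): given each camera's projection matrix and a 2D detection in each view, the 3D point can be recovered by solving a small linear system.

```python
import numpy as np

def triangulate_dlt(proj_mats, points_2d):
    """Triangulate one 3D point from its 2D projections via the direct linear transform."""
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # each camera view contributes two linear constraints on the homogeneous 3D point
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.stack(rows))
    X = Vt[-1]           # null vector = homogeneous solution
    return X[:3] / X[3]  # dehomogenize

# toy two-camera setup (made-up intrinsics; camera 2 shifted 1 unit along x)
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.5, 0.2, 3.0])
X_hat = triangulate_dlt([P1, P2], [project(P1, X_true), project(P2, X_true)])
print(np.allclose(X_hat, X_true))  # → True
```

With noisy real detections the system is overdetermined and the SVD gives a least-squares solution; the residual is where the reprojection errors reported by Pose2Sim come from.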

Again, we first load all the packages and set the necessary folders.

In [4]:
from Pose2Sim import Pose2Sim # key package pose2sim
from trc import TRCData # this is for working with trc opensim type datafiles
#common packages
import os
import subprocess
import glob
import pandas as pd
import os
import cv2
import numpy as np
import shutil

Pose2Sim presets¶

We will make use of specific settings for Pose2Sim that are contained in 'pose2simprjfolder'. Specifically, you will find there a Config.toml in the User folder, which holds important settings for the checkerboard calibration, the type of OpenPose model we're using, and the smoothing we want to apply. You can of course change these settings. Please do check out the specific settings here: https://github.com/sarkadava/demo_3Dtracking_pose2sim/blob/main/Pose2Sim/Empty_project_ENVISION_settings/User/Config.toml

To fully understand them, please also go through the Pose2Sim documentation: https://github.com/perfanalytics/pose2sim

In [5]:
curfolder = os.path.abspath('./')
# what is the folder structure
pose2simprjfolder = curfolder+'/Pose2Sim/Empty_project_ENVISION_settings'
# here are stored videos
inputfolders = curfolder+'/projectdata/'
folderstotrack = glob.glob(curfolder+'/projectdata/*')

Calibration and triangulation¶

Now we will calibrate using the three checkerboard videos. Then we perform the triangulation to get 3D coordinates. Note that you will also get an error estimate of the triangulation.

To understand what these error values mean, please go through the Pose2Sim documentation: https://github.com/perfanalytics/pose2sim

The calibration¶

We have calibration videos that contain a checkerboard; from these we extract images, which serve as input for checkerboard calibration via Pose2Sim. When the calibration is performed, a .toml file will be created in the calibration folder. This contains the intrinsic and extrinsic parameter information of the cameras. Since we are producing quite a few images, this process can take a while (< 20 min), so give it some time.

Triangulation¶

Subsequently, the triangulation is performed and some post-processing (filtering) is applied, as specified in the Config.toml that is copied into the folder from the ENVISION template.

In the pose-3d folder you will now find CSVs and OpenSim files containing the 3D data. At some later point we will also go into OpenSim functionalities; for now, the pipeline can be used to create 3D data.

In [12]:
# set framerate
framerate = 60

# extract every framepick-th frame from the calibration video
framepick = 3

for i in folderstotrack:
    os.chdir(i)
    # get the sessionID
    split = i.split(os.path.sep)
    trialID = split[-1]
    sessionID = trialID.split("_")[-1]
    
    print(trialID)
    # copy a folder in pose2simprjfolder and its contents to folders and add it to another folder using shutil
    source1 = pose2simprjfolder+'/User/'
    source2 = pose2simprjfolder+'/opensim/'
    print('source = ' + source1 + ' to destination: ' + i+'/')
    # copy the user and opensim folder, but only if they don't exist
    if not os.path.exists(i+'/User/'):
        shutil.copytree(source1, i+'/User/')
    if not os.path.exists(i+'/opensim/'):
        shutil.copytree(source2, i+'/opensim/')

    print('calibration started')
    # we assume that calibration videos are only in trial_0
    if '0' in trialID and not os.path.exists(i+'/calibration/Calib_checkerboard.toml'):
    # loop through the calibration folders with a video,
    # then save every framepick-th frame to an image in that folder
        calib_folders = glob.glob(i+'/calibration/*')
        print(calib_folders)
        for c in calib_folders:
            # split the path into its components
            split = c.split(os.path.sep)
            camIndex = split[-1]
            input_video = c+'/'+sessionID+'_checker_'+camIndex+'.avi'
            cap = cv2.VideoCapture(input_video)

            # check if the video file was opened successfully
            if not cap.isOpened():
                print("Error: Couldn't open the video file " + input_video)
                continue
            output_dir = c+'/'

            # frame counter
            frame_count = 0
            print('We are now saving frames extracted from calibration videos')
            while True:
                # read the next frame
                ret, frame = cap.read()
                if not ret:
                    break  # stop when we reach the end of the video

                frame_count += 1

                # save every framepick-th frame (cv2.imwrite expects BGR, so no conversion is needed)
                if frame_count % framepick == 0:
                    frame_filename = f"{output_dir}frame_{frame_count}.png"
                    cv2.imwrite(frame_filename, frame)
                    #print(f"Saved frame {frame_count}")

            # release the video capture object and close the video file
            cap.release()
            cv2.destroyAllWindows()
    
    # calibrate only if it is the first folder of a session
    if '0' in trialID: 
        # check if Calib_checkerboard.toml exists in the calibration folder
        if not os.path.exists(i+'/calibration/Calib_checkerboard.toml'):
            Pose2Sim.calibrateCams() # calibrate with checkerboard
    
    # else we copy the calibration files from the first folder
    else:
        # copy the calibration files from the first folder
        source = inputfolders+'trial_0'+'/calibration/'
        dest = i+'/calibration/'
        print('source = ' + source + ' to destination: ' + dest)
        # copy the calibration folder, but only with the .toml file
        if not os.path.exists(i+'/calibration/'):
            os.makedirs(i + '/calibration/')  # create the destination directory if it doesn't exist
        # if there is no toml file, copy it
        if not os.path.exists(i+'/calibration/Calib_checkerboard.toml'):
            for toml_file in glob.glob(source + 'Calib_checkerboard.toml'):
                shutil.copy(toml_file, dest)

    Pose2Sim.track2D()  # track the person of interest across cameras
    Pose2Sim.triangulate3D()
    Pose2Sim.filter3D()

    # check the pose-3d folder
    if not os.path.exists(i+'/pose-3d/'):
        os.makedirs(i+'/pose-3d/')
    posefolder = './pose-3d/'
    # check any .trc files in the folder
    trcfiles = glob.glob(posefolder + '*.trc')

    # loop through files and convert to csv
    for file in trcfiles:
        # now convert trc data to csv
        mocap_data = TRCData()
        mocap_data.load(os.path.abspath(file))

        num_frames = mocap_data['NumFrames']
        markernames = mocap_data['Markers'] # the marker names

        # convert mocap_data to a pandas dataframe
        mocap_data_df = pd.DataFrame(mocap_data, columns=mocap_data['Markers'])

        # each value in the dataframe is a list of x,y,z coordinates; we want to
        # separate these out so that each marker and dimension has its own column

        # Create a new DataFrame to store separated values
        new_df = pd.DataFrame()

        # Iterate through each column in the original DataFrame
        for column in mocap_data_df.columns:
            # Extract the x, y, z values from each cell
            xyz = mocap_data_df[column].tolist()
            # Create a new DataFrame with the values in the cell separated into their own columns
            xyz_df = pd.DataFrame(xyz, columns=[column + '_x', column + '_y', column + '_z']) 
            # Add the new columns to the new DataFrame
            new_df = pd.concat([new_df, xyz_df], axis=1)

        # add a time column to the new dataframe, assuming the framerate set above
        time = [(k + 1) / framerate for k in range(int(num_frames))]

        # add the time column to the new dataframe
        new_df['Time'] = time

        #write pd dataframe to csv
        new_df.to_csv(file+'.csv', index=False)

---------------------------------------------------------------------
Tracking of the person of interest for trial_0, for all frames.
---------------------------------------------------------------------

Project directory: D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0
trial_0
source = D:\Research_projects\demo_3Dtracking_pose2sim/Pose2Sim/Empty_project_ENVISION_settings/User/ to destination: D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0/
calibration started
100%|███████████████████████████████████████████████████████████████████████████████| 299/299 [00:00<00:00, 727.50it/s]

--> Mean reprojection error for Neck point on all frames is 21.9 px, which roughly corresponds to 85.5 mm. 
--> In average, 0.0 cameras had to be excluded to reach the demanded 40 px error threshold.

Tracked json files are stored in D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0\pose-2d-tracked.
Tracking took 0.43 s.


---------------------------------------------------------------------
Triangulation of 2D points for trial_0, for all frames.
---------------------------------------------------------------------

Project directory: D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0
100%|███████████████████████████████████████████████████████████████████████████████| 299/299 [00:00<00:00, 349.45it/s]

Mean reprojection error for RHip is 19.7 px (~ 0.077 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for RKnee is 17.1 px (~ 0.067 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for RAnkle is 11.7 px (~ 0.046 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for RBigToe is 17.0 px (~ 0.066 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for RSmallToe is 13.7 px (~ 0.054 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for RHeel is 11.1 px (~ 0.043 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for LHip is 19.9 px (~ 0.078 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for LKnee is 19.0 px (~ 0.074 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for LAnkle is 19.8 px (~ 0.077 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for LBigToe is 21.6 px (~ 0.084 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for LSmallToe is 22.7 px (~ 0.089 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for LHeel is 20.3 px (~ 0.079 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for Neck is 21.9 px (~ 0.086 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for Head is 26.9 px (~ 0.105 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for Nose is 25.7 px (~ 0.1 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for RShoulder is 22.6 px (~ 0.088 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for RElbow is 21.2 px (~ 0.083 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for RWrist is 23.6 px (~ 0.092 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for LShoulder is 23.3 px (~ 0.091 m), reached with 0.0 excluded cameras. 
Frames [] were interpolated.
Mean reprojection error for LElbow is 23.0 px (~ 0.09 m), reached with 0.1 excluded cameras. 
Frames ['156:161', '163:163'] were interpolated.
Mean reprojection error for LWrist is 25.9 px (~ 0.101 m), reached with 0.15 excluded cameras. 
Frames ['156:164'] were interpolated.

--> Mean reprojection error for all points on all frames is 20.4 px, which roughly corresponds to 79.7 mm. 
Cameras were excluded if likelihood was below 0.4 and if the reprojection error was above 40 px.
In average, 0.01 cameras had to be excluded to reach these thresholds.
Camera cam1 was excluded 0% of the time, and Camera cam3: 0%.

3D coordinates are stored at D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0\pose-3d\trial_0_0-299.trc.
Triangulation took 0.96 s.


---------------------------------------------------------------------
Filtering 3D coordinates for trial_0, for all frames.
---------------------------------------------------------------------

Project directory: D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0
--> Filter type: Gaussian. Standard deviation kernel: 2
Filtered 3D coordinates are stored at D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0\pose-3d\trial_0_filt_0-299.trc.
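The TRC-to-CSV flattening performed above can be illustrated on a tiny synthetic example (a sketch with made-up marker data, independent of the `trc` package):

```python
import pandas as pd

# two frames of fake data: each marker cell holds an [x, y, z] list,
# mimicking what TRCData exposes per marker
mocap_data_df = pd.DataFrame({
    "RWrist": [[0.1, 1.2, 0.3], [0.11, 1.19, 0.31]],
    "LWrist": [[-0.1, 1.2, 0.3], [-0.12, 1.21, 0.29]],
})

framerate = 60
new_df = pd.DataFrame()
for column in mocap_data_df.columns:
    # split each [x, y, z] cell into three separate columns
    xyz_df = pd.DataFrame(mocap_data_df[column].tolist(),
                          columns=[column + "_x", column + "_y", column + "_z"])
    new_df = pd.concat([new_df, xyz_df], axis=1)

new_df["Time"] = [(k + 1) / framerate for k in range(len(new_df))]
print(list(new_df.columns))
# → ['RWrist_x', 'RWrist_y', 'RWrist_z', 'LWrist_x', 'LWrist_y', 'LWrist_z', 'Time']
```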
In [13]:
import pandas as pd
import glob
import os
import matplotlib.pyplot as plt

# 3D tracking checking
folderstotrack = glob.glob(os.path.abspath('../*'))
print(os.path.abspath('./*'))
files = glob.glob(folderstotrack[0]+'/pose-3d/' + '*.csv')
fileexample = files[1]
print('Well be looking at the following file: ' + fileexample)

MT_tracking = pd.read_csv(fileexample)
MT_tracking.head()
MTcols = MT_tracking.shape[0]

# Get all columns except 'Time'
variables = [column for column in MT_tracking.columns if column != 'Time']

# number of panels (each showing a group of three variables)
num_panels = 20

# Create subplots (one panel for each group of three variables)
fig, axes = plt.subplots(num_panels, 1, figsize=(10, 60))

# Plot each group of three variables
for i in range(num_panels):
    # Determine the variables to plot in this panel
    variables_to_plot = variables[i * 3: (i + 1) * 3]
    
    # Plot each variable in this panel
    for var in variables_to_plot:
        axes[i].plot(MT_tracking['Time'], MT_tracking[var], label=var)
    axes[i].legend()

# Adjust layout to prevent overlapping
plt.tight_layout()
plt.show()
D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0\*
Well be looking at the following file: D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0/pose-3d\trial_0_filt_0-299.trc.csv
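As an additional quick check (a sketch, not part of the original pipeline), you could compute marker speed from the _x/_y/_z columns produced above; sudden spikes often indicate tracking glitches. The column naming and 60 fps framerate match the ones assumed in this notebook, but the data here is synthetic:

```python
import numpy as np
import pandas as pd

def marker_speed(df, marker, framerate=60):
    """Euclidean speed of one marker (units/second) from its _x/_y/_z columns."""
    xyz = df[[marker + "_x", marker + "_y", marker + "_z"]].to_numpy()
    step = np.linalg.norm(np.diff(xyz, axis=0), axis=1)  # distance moved per frame
    return step * framerate

# tiny synthetic example: a marker moving 0.01 units along x per frame
df = pd.DataFrame({"RWrist_x": [0.0, 0.01, 0.02],
                   "RWrist_y": [1.0, 1.0, 1.0],
                   "RWrist_z": [0.5, 0.5, 0.5]})
print(marker_speed(df, "RWrist"))  # ≈ [0.6, 0.6] (0.01 units × 60 fps)
```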

Quality checking¶

Below we make a final animation of the 3D tracking, together with the OpenPose tracking.

In [28]:
# import pandas as pd
import glob
import os
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.animation import FuncAnimation

# 3D tracking checking
folderstotrack = glob.glob(os.path.abspath('../*'))
print(os.path.abspath('./*'))
files = glob.glob(folderstotrack[0]+'/pose-3d/' + '*.csv')
fileexample = files[1]
print('Well be looking at the following file: ' + fileexample)

MT_tracking = pd.read_csv(fileexample)

# Create a figure and axis
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111, projection='3d')

# Define the number of frames
num_frames = len(MT_tracking)

# Define the scatter plot
scatter = ax.scatter([], [], [], marker='o')

# Update function for animation
def update(frame):
    ax.clear()
    ax.set_xlabel('X Label')
    ax.set_ylabel('Y Label')
    ax.set_zlabel('Z Label')
    ax.set_title('3D Animation')
    
    # Set the limits of the axes based on the maximum values
    ax.set_xlim3d(-1, 1.5)
    ax.set_ylim3d(0, 2.5)
    ax.set_zlim3d(-2.5, 0) #we need to flip the vertical
    
    # Plot the data for the current frame
    frame_data = MT_tracking.iloc[frame]
    x = frame_data.filter(like='_z')
    y = frame_data.filter(like='_y')
    z = frame_data.filter(like='_x')
    scatter = ax.scatter(x, y, z*-1, marker='o') #we need to flip the vertical
    return scatter,

# Create the animation
ani = FuncAnimation(fig, update, frames=num_frames, interval=1000/60)

# Save the animation as a video
ani.save('../../Images/temp/3d_animation.mp4', writer='ffmpeg')
print('saved the animation!')

#### Now also combine the animated video with the tracking
videos_to_render_combined(videofolder = '../../Images/temp/', 
                          newvideofile = "../../Images/temp2/combined_video.mp4",
                          oldvideotype = '.mp4')
# Load the video clip
clip = VideoFileClip("../../Images/temp2/combined_video.mp4")
clip.ipython_display(width=960,verbose= False, logger= None)
Animation.save using <class 'matplotlib.animation.FFMpegWriter'>
MovieWriter._run: running command: ffmpeg -f rawvideo -vcodec rawvideo -s 1000x600 -pix_fmt rgba -r 59.99999999999999 -loglevel error -i pipe: -vcodec h264 -pix_fmt yuv420p -y ../../Images/temp/3d_animation.mp4
D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0\*
Well be looking at the following file: D:\Research_projects\demo_3Dtracking_pose2sim\projectdata\trial_0/pose-3d\trial_0_filt_0-299.trc.csv
saved the animation!
D:\Research_projects\demo_3Dtracking_pose2sim\Images\temp2\combined_video.mp4
Moviepy - Building video __temp__.mp4.
Moviepy - Writing video __temp__.mp4

                                                                                                                       
Moviepy - Done !
Moviepy - video ready __temp__.mp4
Out[28]:
(video preview shown in the notebook)

Work in progress¶

There are additional steps in Pose2Sim to scale your kinematics onto a skeletal model and infer the joint angles, and then to go into kinetic measures. We will stop here, but hope to add these steps to the current demo too. Please see https://github.com/perfanalytics/pose2sim for new features as well!